87 research outputs found

    Advances in Hyperspectral Image Classification: Earth monitoring with statistical learning methods

    Hyperspectral images show statistical properties similar to those of natural grayscale or color photographic images. However, the classification of hyperspectral images is more challenging because of the very high dimensionality of the pixels and the small number of labeled examples typically available for learning. These peculiarities lead to particular signal processing problems, mainly characterized by indetermination and complex manifolds. The framework of statistical learning has gained popularity in the last decade. New methods have been presented to account for the spatial homogeneity of images, to include user interaction via active learning, to take advantage of the manifold structure with semisupervised learning, to extract and encode invariances, and to adapt classifiers and image representations to unseen yet similar scenes. This tutorial reviews the main advances in hyperspectral remote sensing image classification through illustrative examples. Comment: IEEE Signal Processing Magazine, 201
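One of the ingredients the tutorial mentions, active learning, can be sketched with a minimal uncertainty-sampling rule: given class centroids and a pool of unlabeled pixels, query the pixel whose two nearest classes are hardest to tell apart. This is an illustrative toy, not the tutorial's own algorithm; the nearest-centroid classifier and the 3-band "spectra" below are assumptions made for the sake of a runnable example.

```python
import numpy as np

def margin_query(centroids, unlabeled):
    """Uncertainty sampling: return the index of the unlabeled pixel whose
    two nearest class centroids are closest in distance (smallest margin)."""
    # distances from every pixel to every class centroid
    d = np.linalg.norm(unlabeled[:, None, :] - centroids[None, :, :], axis=2)
    d.sort(axis=1)                      # per pixel: ascending centroid distances
    margin = d[:, 1] - d[:, 0]          # gap between best and second-best class
    return int(np.argmin(margin))       # most ambiguous pixel -> ask the user

# toy 3-band "spectra": two class centroids and a pool of unlabeled pixels
centroids = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
pool = np.array([[0.1, 0.0, 0.1],      # clearly class 0
                 [0.5, 0.5, 0.5],      # right between the two classes
                 [0.9, 1.0, 0.9]])     # clearly class 1
print(margin_query(centroids, pool))   # -> 1 (the ambiguous pixel)
```

The queried pixel is then labeled by the user and added to the training set, which is exactly where active learning pays off when labeled examples are scarce.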

    Kernel Multivariate Analysis Framework for Supervised Subspace Learning: A Tutorial on Linear and Kernel Multivariate Methods

    Feature extraction and dimensionality reduction are important tasks in many fields of science dealing with signal processing and analysis. The relevance of these techniques is increasing as current sensory devices are developed with ever higher resolution, and problems involving multimodal data sources become more common. A plethora of feature extraction methods are available in the literature, collectively grouped under the field of Multivariate Analysis (MVA). This paper provides a uniform treatment of several methods: Principal Component Analysis (PCA), Partial Least Squares (PLS), Canonical Correlation Analysis (CCA) and Orthonormalized PLS (OPLS), as well as their nonlinear extensions derived by means of the theory of reproducing kernel Hilbert spaces. We also review their connections to other methods for classification and statistical dependence estimation, and introduce some recent developments to deal with the extreme cases of large-scale and small-sample problems. To illustrate the wide applicability of these methods in both classification and regression problems, we analyze their performance in a benchmark of publicly available data sets, and pay special attention to specific real applications involving audio processing for music genre prediction and hyperspectral satellite images for Earth and climate monitoring.
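The kernel route described above can be illustrated with kernel PCA, the simplest member of the family: build an RBF kernel matrix, center it in feature space, and project onto the leading eigenvectors. This is a generic textbook sketch, not the paper's implementation; the `gamma` value and the tiny eigenvalue floor are assumptions.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA sketch: RBF kernel, double centering, eigendecomposition."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF Gram
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    Kc = H @ K @ H                             # center the data in feature space
    vals, vecs = np.linalg.eigh(Kc)            # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    # projections of the training points onto the leading components
    return Kc @ vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
```

The linear methods listed in the abstract (PCA, PLS, CCA, OPLS) differ only in the matrix whose eigenvectors are sought; the kernel trick above carries each of them to the nonlinear case.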

    Crop Yield Estimation and Interpretability With Gaussian Processes

    This work introduces the use of Gaussian processes (GPs) for the estimation and understanding of crop development and yield using multisensor satellite observations and meteorological data. The proposed methodology combines synergistic information on canopy greenness, biomass, soil, and plant water content from optical and microwave sensors with the atmospheric variables typically measured at meteorological stations. A composite covariance is used in the GP model to account for varying scales, nonstationarity, and nonlinear processes. The GP model reports noticeable gains in terms of accuracy with respect to other machine learning approaches for the estimation of corn, wheat, and soybean yields consistently for four years of data across the continental U.S. (CONUS). Sparse GPs allow obtaining fast and compact solutions up to a limit beyond which heavy sparsity compromises the credibility of confidence intervals. We further study the GP interpretability by sensitivity analysis, which reveals that remote sensing parameters accounting for soil moisture and greenness mainly drive the model predictions. GPs finally allow us to identify climate extremes and anomalies impacting crop productivity and their associated drivers.

    Structured Output SVM for Remote Sensing Image Classification

    Traditional kernel classifiers assume independence among the classification outputs. As a consequence, each misclassification receives the same weight in the loss function. Moreover, the kernel function only takes into account the similarity between input values and ignores possible relationships between the classes to be predicted. These assumptions do not hold for most real-life problems, and remote sensing data are no exception. Segmentation of images acquired by airborne or satellite sensors is a very active field of research in which one tries to classify a pixel into a predefined set of classes of interest (e.g. water, grass, trees, etc.). In this situation, the classes share strong relationships, e.g. a tree is naturally (and spectrally) more similar to grass than to water. In this paper, we propose a first approach to remote sensing image classification using structured output learning. In our approach, the output space structure is encoded using a hierarchical tree, and these relations are added to the model in both the kernel and the loss function. The methodology gives rise to a set of new tools for structured classification, and generalizes the traditional non-structured classification methods. Comparison to standard SVM is done numerically, statistically and by visual inspection of the obtained classification maps. Good results are obtained in the challenging case of a multispectral image of very high spatial resolution acquired with QuickBird over an urban area.
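The hierarchical-tree loss described above can be sketched as a tree distance between the predicted and true classes, so confusing two sibling classes costs less than confusing classes from distant branches. The class hierarchy below (with a hypothetical "vegetation"/"nature" grouping) is an assumption made to get a concrete, runnable toy; the paper's own tree and loss may differ.

```python
# hypothetical class hierarchy: each class points to its parent in the tree
parent = {"water": "nature", "grass": "vegetation", "trees": "vegetation",
          "vegetation": "nature", "nature": None}

def path_to_root(c):
    """Ordered list of nodes from a class up to the root of the hierarchy."""
    out = []
    while c is not None:
        out.append(c)
        c = parent[c]
    return out

def tree_loss(y_true, y_pred):
    """Structured loss: number of tree edges between the two classes, so
    confusing 'grass' with 'trees' is cheaper than confusing it with 'water'."""
    if y_true == y_pred:
        return 0
    p, q = path_to_root(y_true), path_to_root(y_pred)
    # lowest common ancestor = shared node closest to y_true on its root path
    lca = min(set(p) & set(q), key=p.index)
    return p.index(lca) + q.index(lca)

print(tree_loss("grass", "trees"))  # 2: siblings under 'vegetation'
print(tree_loss("grass", "water"))  # 3: farther apart in the tree
```

Plugging such a loss into the training objective (and a class-similarity term into the kernel) is what distinguishes the structured SVM from the standard one, where every error would cost 1.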

    Non-linear System Identification with Composite Relevance Vector Machines

    Nonlinear system identification based on relevance vector machines (RVMs) has been traditionally addressed by stacking the input and/or output regressors and then performing standard RVM regression. This letter introduces a full family of composite kernels in order to integrate the input and output information in the mapping function efficiently and hence generalize the standard approach. An improved trade-off between accuracy and sparsity is obtained in several benchmark problems. Also, the RVM yields confidence intervals for the predictions, and it is less sensitive to free parameter selection.
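The contrast with regressor stacking can be sketched with a summation composite kernel: one kernel on the input regressors and another on the past-output regressors, summed, instead of a single kernel on the stacked vector. For brevity, plain kernel ridge regression stands in for the sparse Bayesian RVM fit here, and the toy first-order system, signal lengths, and hyperparameters are all assumptions made for illustration.

```python
import numpy as np

u = np.sin(0.3 * np.arange(60))            # input signal
y = np.zeros(60)
for k in range(1, 60):                     # toy first-order nonlinear system
    y[k] = 0.5 * np.tanh(y[k - 1]) + u[k - 1]

U = u[:-1, None]                           # input regressors   u[k-1]
Yp = y[:-1, None]                          # output regressors  y[k-1]
t = y[1:]                                  # prediction targets y[k]

def rbf(A, B, gamma):
    """RBF Gram matrix between two sets of regressors."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

# summation composite kernel: separate kernels on input and output parts
K = rbf(U, U, 2.0) + rbf(Yp, Yp, 2.0)

# kernel ridge regression stands in for the (sparse Bayesian) RVM fit
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(t)), t)
pred = K @ alpha
print(float(np.max(np.abs(pred - t))))     # small training error
```

Treating the input and output parts with separate kernels is what lets each be weighted or shaped independently, which is the flexibility the composite-kernel family adds over simple stacking.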